# Complex Problem Decomposition
## Deepseek R1 Distill Phi 3 Mini 4k Lorar8 Alpha16 50000samples
A reasoning model distilled from DeepSeek-R1, supporting Chain-of-Thought (CoT) reasoning.

- License: MIT
- Tags: Large Language Model, English
- Author: GPD1
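The card above names Chain-of-Thought (CoT) reasoning, in which the model is prompted to lay out intermediate steps before committing to an answer. A minimal sketch of the idea, with no network call: the helper names (`build_cot_prompt`, `extract_answer`) and the `Answer:` convention are illustrative assumptions, not part of this model's API.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a Chain-of-Thought instruction so the model
    reasons step by step before stating a final answer."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer "
        "on a line starting with 'Answer:'."
    )

def extract_answer(completion: str) -> str:
    """Pull the final answer out of a CoT-formatted completion."""
    for line in completion.splitlines():
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    return completion.strip()  # fall back to the raw text

if __name__ == "__main__":
    prompt = build_cot_prompt(
        "If a train covers 120 km in 2 hours, what is its average speed?"
    )
    # A CoT-capable model would reply with intermediate steps, e.g.:
    reply = "Speed = distance / time = 120 / 2 = 60 km/h.\nAnswer: 60 km/h"
    print(extract_answer(reply))  # → 60 km/h
```

Separating the reasoning trace from the final `Answer:` line keeps downstream parsing simple even when the trace varies in length.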
## Synthia 70B V1.5
Synthia-70B-v1.5 is a 70-billion-parameter large language model based on the Llama 2 architecture, focused on complex reasoning and coherent responses via the Tree of Thought method.

- Tags: Large Language Model, Transformers
- Author: migtissera
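The Tree of Thought method mentioned above explores several candidate reasoning paths in parallel, scoring and pruning them rather than committing to a single chain. A minimal sketch of that search pattern (a plain beam search over "thoughts"; the toy integer task and all function names are illustrative assumptions, not Synthia's actual procedure):

```python
from typing import Callable

def tree_of_thought_search(
    root: int,
    expand: Callable[[int], list[int]],
    score: Callable[[int], float],
    beam_width: int = 3,
    depth: int = 6,
) -> int:
    """Tree-of-Thought-style beam search: at each depth, expand every
    surviving thought, score the candidates (lower is better), and keep
    only the most promising beam_width of them."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for t in frontier for c in expand(t)]
        if not candidates:
            break
        frontier = sorted(candidates, key=score)[:beam_width]
    return frontier[0]

if __name__ == "__main__":
    target = 23
    # Toy task: starting from 1, each step may double or increment;
    # the score is the distance from the target.
    best = tree_of_thought_search(
        root=1,
        expand=lambda t: [t * 2, t + 1],
        score=lambda t: abs(target - t),
    )
    print(best)  # → 25 (the beam's closest surviving thought to 23)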